International Journal of Artificial Intelligence and Machine Learning
Volume 1, Issue 1, July 2021
Research Paper | Open Access
Generative adversarial simulator
Jonathan Raiman¹*
¹Paris-Saclay University, Paris, France. Email: jonathanraiman@gmail.com
Int. Artif. Intell. & Mach. Learn. 1(1) (2021) 31-46 | DOI: https://doi.org/10.51483/IJAIML.1.1.2021.31-46
Received: 20/11/2020 | Accepted: 15/05/2021 | Published: 05/07/2021
Abstract: Knowledge distillation between machine learning models has opened many new avenues for reducing parameter counts, improving performance, or amortizing training time when changing architectures between the teacher and student networks. In reinforcement learning, this technique has also been applied to distill teacher policies into students. Until now, policy distillation has required access to a simulator or real-world trajectories. In this paper we introduce a simulator-free approach to knowledge distillation in the context of reinforcement learning. A key challenge is having the student learn the multiplicity of cases that correspond to a given action. While prior work has shown that data-free knowledge distillation is possible with supervised learning models by generating synthetic examples, these approaches are vulnerable to producing only a single prototype example for each class. We propose an extension that explicitly handles multiple observations per output class, seeking to find as many exemplars as possible for a given output class by reinitializing our data generator and making use of an adversarial loss. To the best of our knowledge, this is the first demonstration of simulator-free knowledge distillation between a teacher and a student policy. This new approach improves over the state of the art on data-free learning of student networks on benchmark datasets (MNIST, Fashion-MNIST, CIFAR-10), and we also demonstrate that it specifically tackles issues with multiple input modes. We also identify open problems when distilling agents trained in high-dimensional environments such as Pong, Breakout, or Seaquest.
Keywords: Machine learning, Reinforcement learning, Student networks, Data-free learning
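The abstract gives no implementation details, but the loop it describes (an adversarially trained data generator that is periodically reinitialized so the student sees multiple exemplars per output class rather than a single prototype) can be sketched. Below is a minimal PyTorch sketch under those assumptions; every name (Generator, distill_round, distill, n_restarts), the architectures, and the hyperparameters are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Maps latent noise to synthetic observations (flattened inputs)."""
    def __init__(self, latent_dim=64, obs_dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, obs_dim), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

def kd_loss(s_logits, t_logits):
    # Standard distillation objective: KL divergence between the student's
    # and the teacher's output distributions.
    return F.kl_div(F.log_softmax(s_logits, dim=-1),
                    F.softmax(t_logits, dim=-1), reduction="batchmean")

def distill_round(teacher, student, generator, steps=200, batch=128, latent_dim=64):
    g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
    s_opt = torch.optim.Adam(student.parameters(), lr=1e-3)
    for _ in range(steps):
        # Generator step: adversarially seek observations on which the
        # student still disagrees with the teacher (maximize the KD loss).
        x = generator(torch.randn(batch, latent_dim))
        g_loss = -kd_loss(student(x), teacher(x).detach())
        g_opt.zero_grad()
        g_loss.backward()
        g_opt.step()

        # Student step: match the teacher on freshly generated observations.
        with torch.no_grad():
            x = generator(torch.randn(batch, latent_dim))
            t_logits = teacher(x)
        s_loss = kd_loss(student(x), t_logits)
        s_opt.zero_grad()
        s_loss.backward()
        s_opt.step()

def distill(teacher, student, n_restarts=5, **round_kwargs):
    # Reinitializing the generator between rounds restarts the adversarial
    # search from fresh weights, encouraging it to discover new exemplars
    # (input modes) per class instead of collapsing onto one prototype.
    for _ in range(n_restarts):
        generator = Generator()
        distill_round(teacher, student, generator, **round_kwargs)

if __name__ == "__main__":
    teacher = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
    for p in teacher.parameters():
        p.requires_grad_(False)  # the teacher is fixed during distillation
    student = nn.Sequential(nn.Linear(784, 32), nn.ReLU(), nn.Linear(32, 10))
    distill(teacher, student)
```

Note that the student persists across generator restarts, so it accumulates what it learned from each round's exemplars; only the adversarial search is reset.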